Fisher information

In mathematical statistics, the Fisher information (sometimes simply called information〔Lehmann & Casella, p. 115〕) is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' of a distribution that models ''X''.
Formally, it is the variance of the score, or the expected value of the observed information. In Bayesian statistics, the asymptotic distribution of the posterior mode depends on the Fisher information and not on the prior (according to the Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families).〔Lucien Le Cam (1986) ''Asymptotic Methods in Statistical Decision Theory'': Pages 336 and 618–621 (von Mises and Bernstein).
〕 The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information is also used in the calculation of the Jeffreys prior, which is used in Bayesian statistics.
The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test.
Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to satisfy a principle of maximum Fisher information.〔Frieden & Gatenby (2013)〕 The level of the maximum depends upon the nature of the system constraints.
==Definition==
The Fisher information is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' upon which the probability of ''X'' depends. The probability function for ''X'', which is also the likelihood function for ''θ'', is a function ''f''(''X''; ''θ''); it is the probability mass (or probability density) of the random variable ''X'' conditional on the value of ''θ''. The partial derivative with respect to ''θ'' of the natural logarithm of the likelihood function is called the score.
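For instance, suppose ''X'' is a Bernoulli random variable with success probability ''θ'', so that ''f''(''x''; ''θ'') = ''θ''^''x''(1 − ''θ'')^(1−''x'') for ''x'' ∈ {0, 1}. Then the score is
:
\frac{\partial}{\partial\theta} \log f(x;\theta)
= \frac{x}{\theta} - \frac{1-x}{1-\theta}
= \frac{x-\theta}{\theta(1-\theta)}\,.
This Bernoulli example is continued below.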
Under certain regularity conditions,〔''Lectures on statistical inference''〕 it can be shown that the first moment of the score (that is, its expected value) is 0:
:
\operatorname{E} \left[\left. \frac{\partial}{\partial\theta} \log f(X;\theta) \,\right|\, \theta \right]
=
\operatorname{E} \left[\left. \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X;\theta)} \,\right|\, \theta \right]
=
\int \frac{\frac{\partial}{\partial\theta} f(x;\theta)}{f(x;\theta)}\, f(x;\theta)\; \mathrm{d}x

:
=
\int \frac{\partial}{\partial\theta} f(x;\theta)\; \mathrm{d}x
=
\frac{\partial}{\partial\theta} \int f(x; \theta)\; \mathrm{d}x
=
\frac{\partial}{\partial\theta}\, 1 = 0.

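In the Bernoulli example above, this can be verified directly: since E[''X''] = ''θ'',
:
\operatorname{E} \left[\left. \frac{X-\theta}{\theta(1-\theta)} \,\right|\, \theta \right]
= \frac{\operatorname{E}[X]-\theta}{\theta(1-\theta)}
= \frac{\theta-\theta}{\theta(1-\theta)} = 0.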
The second moment is called the Fisher information:
:
\mathcal{I}(\theta)
= \operatorname{E} \left[\left. \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2 \,\right|\, \theta \right]
= \int \left(\frac{\partial}{\partial\theta} \log f(x;\theta)\right)^2 f(x; \theta)\; \mathrm{d}x\,,

where, for any given value of ''θ'', the expression E[… | ''θ''] denotes the conditional expectation over values for ''X'' with respect to the probability function ''f''(''x''; ''θ'') given ''θ''. Note that 0 \leq \mathcal{I}(\theta) < \infty. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable ''X'' has been averaged out.
Since the expectation of the score is zero, the Fisher information is also the variance of the score.
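In the Bernoulli example, the score (''X'' − ''θ'')/(''θ''(1 − ''θ'')) has mean zero and Var(''X'') = ''θ''(1 − ''θ''), so
:
\mathcal{I}(\theta)
= \operatorname{E} \left[\left. \left(\frac{X-\theta}{\theta(1-\theta)}\right)^2 \,\right|\, \theta \right]
= \frac{\theta(1-\theta)}{\theta^2(1-\theta)^2}
= \frac{1}{\theta(1-\theta)}\,.
This is smallest at ''θ'' = 1/2 and grows without bound as ''θ'' approaches 0 or 1.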
If log ''f''(''X''; ''θ'') is twice differentiable with respect to ''θ'', and under certain regularity conditions, then the Fisher information may also be written as〔Lehmann & Casella, eq. (2.5.16).〕
:
\mathcal{I}(\theta) = - \operatorname{E} \left[\left. \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) \,\right|\, \theta \right]\,,

since
:
\frac{\partial^2}{\partial\theta^2} \log f(X;\theta)
=
\frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)}
\;-\;
\left( \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X;\theta)} \right)^2
=
\frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)}
\;-\;
\left( \frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2

and
:
\operatorname{E} \left[\left. \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X;\theta)} \,\right|\, \theta \right]
=
\cdots
=
\frac{\partial^2}{\partial\theta^2} \int f(x; \theta)\; \mathrm{d}x
=
\frac{\partial^2}{\partial\theta^2}\, 1 = 0.

Thus, the Fisher information is the negative of the expectation of the second derivative with respect to ''θ'' of the natural logarithm of ''f''. Information may be seen to be a measure of the "curvature" of the support curve near the maximum likelihood estimate of ''θ''. A "blunt" support curve (one with a shallow maximum) would have a low negative expected second derivative, and thus low information; while a sharp one would have a high negative expected second derivative and thus high information.
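The equivalence of the two expressions can be checked numerically. The following is a minimal Monte Carlo sketch for the Bernoulli example, assuming only NumPy (the variable names are illustrative); it estimates the information both as the second moment of the score and as the negative expected second derivative, and compares both with the closed form 1/(''θ''(1 − ''θ'')).
<syntaxhighlight lang="python">
import numpy as np

# Bernoulli model: f(x; theta) = theta**x * (1 - theta)**(1 - x), x in {0, 1}.
# Score:               d/dtheta  log f =  x/theta    - (1 - x)/(1 - theta)
# Second derivative:   d2/dtheta2 log f = -x/theta**2 - (1 - x)/(1 - theta)**2
# Closed form:         I(theta) = 1 / (theta * (1 - theta))

rng = np.random.default_rng(0)
theta = 0.3
x = rng.binomial(n=1, p=theta, size=1_000_000).astype(float)

score = x / theta - (1.0 - x) / (1.0 - theta)
second_deriv = -x / theta**2 - (1.0 - x) / (1.0 - theta)**2

info_from_score = np.mean(score**2)           # E[(score)^2]
info_from_curvature = -np.mean(second_deriv)  # -E[second derivative]
info_exact = 1.0 / (theta * (1.0 - theta))    # = 1/(0.3 * 0.7), about 4.76

print(info_from_score, info_from_curvature, info_exact)
# The two Monte Carlo estimates agree with the exact value up to sampling error.
</syntaxhighlight>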
Information is additive, in that the information yielded by two independent experiments is the sum of the information from each experiment separately:
: \mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta).
This result follows from the elementary fact that if random variables are independent, the variance of their sum is the sum of their variances.
In particular, the information in a random sample of size ''n'' is ''n'' times that in a sample of size 1, when observations are independent and identically distributed.
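In the Bernoulli example, a random sample of ''n'' independent observations therefore carries Fisher information
:
\mathcal{I}_n(\theta) = n\,\mathcal{I}(\theta) = \frac{n}{\theta(1-\theta)}\,.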
The information provided by a sufficient statistic is the same as that of the sample ''X''. This may be seen by using Neyman's factorization criterion for a sufficient statistic. If ''T''(''X'') is sufficient for ''θ'', then
: f(X;\theta) = g(T(X), \theta) h(X) \!
for some functions ''g'' and ''h''. See sufficient statistic for a more detailed explanation. The equality of information then follows from the following fact:
: \frac{\partial}{\partial\theta} \log \left[ f(X;\theta) \right]
= \frac{\partial}{\partial\theta} \log \left[ g(T(X);\theta) \right]
which follows from the definition of Fisher information, and the independence of ''h''(''X'') from ''θ''. More generally, if ''T'' = ''t''(''X'') is a statistic, then
:
\mathcal{I}_T(\theta) \leq \mathcal{I}_X(\theta)

with equality if and only if ''T'' is a sufficient statistic.
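For instance, for ''n'' independent Bernoulli(''θ'') trials the sum ''T'' of the observations is a sufficient statistic and follows a Binomial(''n'', ''θ'') distribution, whose log-probability is ''T'' log ''θ'' + (''n'' − ''T'') log(1 − ''θ'') plus a term that does not depend on ''θ''. Using the second-derivative form of the information with E[''T''] = ''nθ'',
:
\mathcal{I}_T(\theta)
= \operatorname{E} \left[\left. \frac{T}{\theta^2} + \frac{n-T}{(1-\theta)^2} \,\right|\, \theta \right]
= \frac{n}{\theta} + \frac{n}{1-\theta}
= \frac{n}{\theta(1-\theta)}
= \mathcal{I}_X(\theta)\,,
which is exactly the information carried by the full sample of ''n'' observations.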
